Results 1 - 20 of 16,246
1.
J Diabetes ; 16(5): e13553, 2024 May.
Article in English | MEDLINE | ID: mdl-38664882

ABSTRACT

BACKGROUND: Prediabetes management is a priority for policymakers globally, to avoid/delay type 2 diabetes (T2D) and reduce severe, costly health consequences. Countries moving from low to middle income are most at risk from the T2D "epidemic" and may find implementing preventive measures challenging; yet prevention has largely been evaluated in developed countries. METHODS: Markov cohort simulations explored costs and benefits of various prediabetes management approaches, expressed as "savings" to the public health care system, for three countries with high prediabetes prevalence and contrasting economic status (Poland, Saudi Arabia, Vietnam). Two scenarios were compared up to 15 y: "inaction" (no prediabetes intervention) and "intervention" with metformin extended release (ER), intensive lifestyle change (ILC), ILC with metformin (ER), or ILC with metformin (ER) "titration." RESULTS: T2D was the highest-cost health state at all time horizons due to resource use, and inaction produced the highest T2D costs, ranging from 9% to 34% of total health care resource costs. All interventions reduced T2D versus inaction, the most effective being ILC + metformin (ER) "titration" (39% reduction at 5 y). Metformin (ER) was the only strategy that produced a net saving across the time horizon; however, relative total health care system costs of other interventions vs inaction declined over time up to 15 y. Vietnam was most sensitive to cost and parameter changes in a one-way sensitivity analysis. CONCLUSIONS: Metformin (ER) and lifestyle interventions for prediabetes offer promise for reducing T2D incidence. Metformin (ER) could reduce T2D patient numbers and health care costs, given concerns regarding adherence in the context of funding/reimbursement challenges for lifestyle interventions.
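To make the modelling approach concrete, below is a minimal sketch of a Markov cohort simulation of the kind this abstract describes. The three states, annual transition probabilities, and per-state costs are invented placeholders, not the study's calibrated inputs for Poland, Saudi Arabia, or Vietnam.

```python
import numpy as np

# Hypothetical 3-state Markov cohort model: prediabetes -> T2D -> dead.
# All probabilities and costs below are illustrative assumptions.
states = ["prediabetes", "T2D", "dead"]
P_inaction = np.array([
    [0.90, 0.08, 0.02],   # from prediabetes
    [0.00, 0.95, 0.05],   # from T2D
    [0.00, 0.00, 1.00],   # dead (absorbing)
])
P_intervention = np.array([   # intervention lowers prediabetes -> T2D
    [0.94, 0.04, 0.02],
    [0.00, 0.95, 0.05],
    [0.00, 0.00, 1.00],
])
annual_cost = np.array([150.0, 2000.0, 0.0])  # assumed per-state costs

def simulate(P, years=15):
    dist = np.array([1.0, 0.0, 0.0])      # whole cohort starts in prediabetes
    total_cost = 0.0
    for _ in range(years):
        total_cost += dist @ annual_cost  # cost accrued this cycle
        dist = dist @ P                   # one Markov transition
    return dist, total_cost

for name, P in [("inaction", P_inaction), ("intervention", P_intervention)]:
    dist, cost = simulate(P)
    print(f"{name}: share with T2D after 15 y = {dist[1]:.3f}, cost = {cost:.0f}")
```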


Subject(s)
Diabetes Mellitus, Type 2 , Hypoglycemic Agents , Markov Chains , Metformin , Prediabetic State , Humans , Prediabetic State/economics , Prediabetic State/therapy , Prediabetic State/epidemiology , Diabetes Mellitus, Type 2/economics , Diabetes Mellitus, Type 2/epidemiology , Diabetes Mellitus, Type 2/prevention & control , Metformin/therapeutic use , Metformin/economics , Vietnam/epidemiology , Hypoglycemic Agents/therapeutic use , Hypoglycemic Agents/economics , Saudi Arabia/epidemiology , Cost-Benefit Analysis , Cost Savings , Male , Female , Middle Aged , Life Style , Health Care Costs/statistics & numerical data
2.
Sci Rep ; 14(1): 9449, 2024 04 24.
Article in English | MEDLINE | ID: mdl-38658780

ABSTRACT

This study examines the historical evolution of the global primary energy consumption (GPEC) mix, comprising fossil (liquid petroleum, gaseous, and coal fuels) and non-fossil (nuclear, hydro, and other renewable) energy sources, while highlighting the impact of the coronavirus disease 2019 (COVID-19) pandemic. GPEC data for 2005-2021 were taken from the annual reports published by British Petroleum. The equilibrium state, a property of classical predictive modeling based on Markov chains, is employed as an investigative tool. The pandemic proved to be a blessing in disguise for the global energy sector by, at least temporarily, reducing demand for fossil energy sources and thereby easing the burden on the environment. Some significant long-term impacts of the pandemic, such as the penetration of other energy sources alongside hydro and renewables into GPEC, occurred in the second and third years after its 2019 outbreak (2021 and 2022) rather than in the first year (2020). The novelty of this research lies in applying the equilibrium-state feature of compositional Markov chain prediction to the GPEC mix. The analysis of past trends suggests an advancement toward a better global energy future comprising cleaner fossil resources (mainly natural gas) alongside nuclear, hydro, and renewable sources in the long run.
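The "equilibrium state" used here is the stationary distribution of a Markov chain. A minimal sketch of computing it follows, with an invented three-source transition matrix rather than the BP-derived one.

```python
import numpy as np

# Toy energy-mix Markov chain (fossil, nuclear, renewables); transition
# probabilities are illustrative placeholders, not fitted to BP data.
P = np.array([
    [0.85, 0.05, 0.10],
    [0.10, 0.80, 0.10],
    [0.05, 0.05, 0.90],
])

# The equilibrium state pi satisfies pi P = pi: take the left eigenvector
# of P for eigenvalue 1 and normalise it to sum to one.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()
print("equilibrium shares (fossil, nuclear, renewables):", np.round(pi, 3))
```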


Subject(s)
COVID-19 , Markov Chains , Pandemics , COVID-19/epidemiology , Humans , SARS-CoV-2/isolation & purification , Disease Outbreaks , Fossil Fuels , Energy-Generating Resources
3.
J Chem Inf Model ; 64(8): 3008-3020, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38573053

ABSTRACT

Nuclear magnetic resonance (NMR) spectroscopy is an important analytical technique in synthetic organic chemistry, but its integration into high-throughput experimentation workflows has been limited by the necessity of manually analyzing the NMR spectra of new chemical entities. Current efforts to automate the analysis of NMR spectra rely on comparisons to databases of reported spectra for known compounds and, therefore, are incompatible with the exploration of new chemical space. By reframing the NMR spectrum of a reaction mixture as a joint probability distribution, we have used Hamiltonian Monte Carlo (a Markov chain Monte Carlo method) and density functional theory to fit the predicted NMR spectra to those of crude reaction mixtures. This approach enables the deconvolution and analysis of the spectra of mixtures of compounds without relying on reported spectra. The utility of our approach to analyze crude reaction mixtures is demonstrated with the experimental spectra of reactions that generate a mixture of isomers, such as Wittig olefination and C-H functionalization reactions. Compounds in a reaction mixture and their relative concentrations are correctly identified with a mean absolute error as low as 1%.


Subject(s)
Proton Magnetic Resonance Spectroscopy , Monte Carlo Method , Markov Chains , Density Functional Theory
4.
PLoS Comput Biol ; 20(4): e1011993, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38557869

ABSTRACT

The intensification of intervention activities against the fatal vector-borne disease gambiense human African trypanosomiasis (gHAT, sleeping sickness) in the last two decades has led to a large decline in the number of annually reported cases. However, while we move closer to achieving the ambitious target of elimination of transmission (EoT) to humans, pockets of infection remain, and it becomes increasingly important to quantitatively assess if different regions are on track for elimination, and where intervention efforts should be focused. We present a previously developed stochastic mathematical model for gHAT in the Democratic Republic of Congo (DRC) and show that this same formulation is able to capture the dynamics of gHAT observed at the health area level (approximately 10,000 people). This analysis is the first in which any stochastic gHAT model has been fitted directly to case data, and it allows us to better quantify the uncertainty in our results. The analysis focuses on utilising a particle filter Markov chain Monte Carlo (pMCMC) methodology to fit the model to the data from 16 health areas of Mosango health zone in Kwilu province as a case study. The spatial heterogeneity in cases is reflected in modelling results, where we predict that under the current intervention strategies, the health area of Kinzamba II, which has approximately one third of the health zone's cases, will have the latest expected year for EoT. We find that fitting the analogous deterministic version of the gHAT model using MCMC has substantially faster computation times than fitting the stochastic model using pMCMC, but produces virtually indistinguishable posterior parameterisations. This suggests that expanding health area fitting, to cover more of the DRC, should be done with deterministic fits for efficiency, but with stochastic projections used to capture both the parameter and stochastic variation in case reporting and elimination year estimations.
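As a hedged illustration of the deterministic-model MCMC fitting the abstract finds efficient, the sketch below fits a one-parameter exponential decline in annual case counts with a random-walk Metropolis-Hastings sampler and a Poisson likelihood. The data and model are synthetic stand-ins, not the gHAT model or DRC case data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic annual case counts from a declining epidemic (illustrative only).
years = np.arange(10)
true_rate = 0.25
cases = rng.poisson(100 * np.exp(-true_rate * years))

def log_post(r):
    # Flat prior on r > 0; Poisson log-likelihood (constants dropped)
    # for a deterministic exponential decline in expected cases.
    if r <= 0:
        return -np.inf
    mu = 100 * np.exp(-r * years)
    return np.sum(cases * np.log(mu) - mu)

# Random-walk Metropolis-Hastings over the decline rate r.
r, lp = 0.1, log_post(0.1)
samples = []
for _ in range(5000):
    r_new = r + rng.normal(0, 0.02)
    lp_new = log_post(r_new)
    if np.log(rng.uniform()) < lp_new - lp:   # accept/reject step
        r, lp = r_new, lp_new
    samples.append(r)

print("posterior mean decline rate:", np.mean(samples[1000:]))  # burn-in 1000
```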


Subject(s)
Trypanosomiasis, African , Animals , Humans , Trypanosomiasis, African/epidemiology , Democratic Republic of the Congo/epidemiology , Models, Theoretical , Forecasting , Markov Chains , Trypanosoma brucei gambiense
5.
Bull Math Biol ; 86(6): 61, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38662288

ABSTRACT

In this paper, we present a mathematical model for tuberculosis that includes treatment of latent tuberculosis cases and incorporates social interventions according to their impact on tuberculosis incidence, cure, and recovery. We added two variables tracking accumulated deaths and active cases to the model so that the yearly incidence and mortality rates it reports can be studied. Our objective is to study the impact of social programs and latent tuberculosis therapies, in particular once-weekly isoniazid-rifapentine for 12 weeks (3HP). The computational experiments used data from Brazil, and the model was calibrated with the Markov chain Monte Carlo (MCMC) method under a Bayesian approach. We studied the effect of increasing the coverage of two social programs, the Bolsa Familia Programme (BFP) and the Family Health Strategy (FHS), and of implementing 3HP as a substitution therapy, for latent tuberculosis diagnosis and treatment rates of 1% and 5%. Based on the model outputs for the period 2023-2035, the FHS outperformed the BFP among the social interventions, and 3HP with the higher diagnosis and treatment rate gave the larger reductions in incidence and mortality and in cases and deaths averted. To link the social and biomedical interventions, we constructed two scenarios combining them with the diagnosis and treatment rates. The model results confirm that combining the social interventions studied with 3HP at the highest diagnosis and treatment rate yields the best outcomes compared with the other interventions applied separately or jointly: incidence fell by 36.54% relative to the model with current strategies and coverage, and a greater number of tuberculosis cases and deaths was averted.


Subject(s)
Antitubercular Agents , Bayes Theorem , Isoniazid , Latent Tuberculosis , Markov Chains , Mathematical Concepts , Monte Carlo Method , Rifampin , Humans , Brazil/epidemiology , Incidence , Isoniazid/administration & dosage , Antitubercular Agents/administration & dosage , Rifampin/administration & dosage , Rifampin/analogs & derivatives , Rifampin/therapeutic use , Latent Tuberculosis/epidemiology , Latent Tuberculosis/drug therapy , Latent Tuberculosis/mortality , Models, Biological , Tuberculosis/mortality , Tuberculosis/epidemiology , Tuberculosis/drug therapy , Computer Simulation
6.
BMC Bioinformatics ; 25(1): 151, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38627634

ABSTRACT

BACKGROUND: Genomes are inherently inhomogeneous, with features such as base composition, recombination, gene density, and gene expression varying along chromosomes. Evolutionary, biological, and biomedical analyses aim to quantify this variation, account for it during inference procedures, and ultimately determine the causal processes behind it. Since sequential observations along chromosomes are not independent, it is unsurprising that autocorrelation patterns have been observed, e.g., in human base composition. In this article, we develop a class of Hidden Markov Models (HMMs) called oHMMed (ordered HMM with emission densities; the corresponding R package of the same name is available on CRAN): they identify the number of comparably homogeneous regions within autocorrelated observed sequences. These are modelled as discrete hidden states; the observed data points are realisations of continuous probability distributions with state-specific means that enable ordering of these distributions. The observed sequence is labelled according to the hidden states, permitting only neighbouring states that are also neighbours within the ordering of their associated distributions. The parameters that characterise these state-specific distributions are inferred. RESULTS: We apply our oHMMed algorithms to the proportion of G and C bases (modelled as a mixture of normal distributions) and the number of genes (modelled as a mixture of Poisson-gamma distributions) in windows along the human, mouse, and fruit fly genomes. This results in a partitioning of the genomes into regions by statistically distinguishable averages of these features, and in a characterisation of their continuous patterns of variation. In regard to the genomic G and C proportion, this latter result distinguishes oHMMed from segmentation algorithms based on isochore or compositional domain theory. We further use oHMMed to conduct a detailed analysis of variation in chromatin accessibility (ATAC-seq) and the epigenetic markers H3K27ac and H3K27me3 (modelled as a mixture of Poisson-gamma distributions) along human chromosome 1 and their correlations. CONCLUSIONS: Our algorithms provide a biologically assumption-free approach to characterising genomic landscapes shaped by continuous, autocorrelated patterns of variation. Despite this, the resulting genome segmentation enables extraction of compositionally distinct regions for further downstream analyses.
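A minimal sketch of the forward recursion underlying HMM likelihood evaluation of this kind: a two-state chain with ordered Gaussian emission means, echoing oHMMed's ordering constraint. All parameters and the toy GC-proportion windows are illustrative, not oHMMed's inferred values.

```python
import numpy as np

# Two hidden states ("low-GC", "high-GC") with ordered means mu0 < mu1.
A = np.array([[0.95, 0.05],
              [0.05, 0.95]])
mu, sigma = np.array([0.35, 0.55]), 0.05
init = np.array([0.5, 0.5])

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def forward_loglik(obs):
    # Scaled forward recursion: returns log P(obs | model).
    alpha = init * gauss(obs[0], mu, sigma)
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for x in obs[1:]:
        alpha = (alpha @ A) * gauss(x, mu, sigma)
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

windows = np.array([0.34, 0.36, 0.33, 0.52, 0.56, 0.54])  # toy GC proportions
print("log-likelihood:", forward_loglik(windows))
```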


Subject(s)
Genome , Genomics , Animals , Humans , Mice , Markov Chains , Base Composition , Probability , Algorithms
7.
Brief Bioinform ; 25(3)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38628114

ABSTRACT

Spatial transcriptomics (ST) has become a powerful tool for exploring the spatial organization of gene expression in tissues. Imaging-based methods, though offering superior spatial resolutions at the single-cell level, are limited in either the number of imaged genes or the sensitivity of gene detection. Existing approaches for enhancing ST rely on the similarity between ST cells and reference single-cell RNA sequencing (scRNA-seq) cells. In contrast, we introduce stDiff, which leverages relationships between gene expression abundance in scRNA-seq data to enhance ST. stDiff employs a conditional diffusion model, capturing gene expression abundance relationships in scRNA-seq data through two Markov processes: one introducing noise to transcriptomics data and the other denoising to recover them. The missing portion of ST is predicted by incorporating the original ST data into the denoising process. In our comprehensive performance evaluation across 16 datasets, utilizing multiple clustering and similarity metrics, stDiff stands out for its exceptional ability to preserve topological structures among cells, positioning itself as a robust solution for cell population identification. Moreover, stDiff's enhancement outcomes closely mirror the actual ST data within the batch space. Across diverse spatial expression patterns, our model accurately reconstructs them, delineating distinct spatial boundaries. This highlights stDiff's capability to unify the observed and predicted segments of ST data for subsequent analysis. We anticipate that stDiff, with its innovative approach, will contribute to advancing ST imputation methodologies.


Subject(s)
Benchmarking , Gene Expression Profiling , Cluster Analysis , Diffusion , Markov Chains , Sequence Analysis, RNA , Transcriptome
8.
BMC Oral Health ; 24(1): 483, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38649858

ABSTRACT

BACKGROUND: Root caries is a prevalent issue affecting dental health, particularly among elderly individuals with exposed root surfaces. Fluoride therapy has shown effectiveness in preventing root caries, but few studies have addressed its cost-effectiveness in the elderly population. This study aimed to evaluate the cost-effectiveness of a fluoride treatment program for preventing root caries in elderly persons within the context of Chinese public healthcare. METHODS: A Markov simulation model was adopted for the cost-effectiveness analysis in a hypothetical scenario from a healthcare system perspective. A 60-year-old subject with 23 teeth was simulated for 20 years. A 5% sodium fluoride varnish treatment was compared with no preventive intervention in terms of effectiveness and cost. Tooth years free of root caries were set as the effect. Transition probabilities were estimated from the data of a community-based cohort and published studies, and costs were based on documents published by the government. The incremental cost-effectiveness ratio (ICER) was calculated to evaluate cost-effectiveness. Univariate and probabilistic sensitivity analyses were performed to evaluate the influence of data uncertainty. RESULTS: Fluoride treatment was more effective (with a difference of 10.20 root caries-free tooth years) but also more costly (with a difference of ¥1636.22). The ICER was ¥160.35 per root caries-free tooth year gained. One-way sensitivity analysis showed that the risk ratio of root caries in the fluoride treatment group influenced the result most. In the probabilistic sensitivity analysis, fluoride treatment was cost-effective in 70.5% of the simulated cases. CONCLUSIONS: Regular 5% sodium fluoride varnish application was cost-effective for preventing root caries in elderly persons in most scenarios when data uncertainty was considered, but to a limited extent. Improved public dental health awareness may reduce the incremental cost and make the intervention more cost-effective. Overall, the study sheds light on the economic viability and impact of such preventive interventions, providing a scientific basis for dental care policies and healthcare resource allocation.
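The reported ICER is simple arithmetic on the reported deltas: incremental cost divided by incremental effect. A two-line check of the abstract's figures (the small residual difference comes from rounding of the published deltas):

```python
# ICER = (cost_intervention - cost_comparator) / (effect_intervention - effect_comparator)
delta_cost = 1636.22    # yuan: fluoride varnish minus no intervention
delta_effect = 10.20    # root caries-free tooth years gained

icer = delta_cost / delta_effect
print(f"ICER = {icer:.2f} yuan per root caries-free tooth year")
# ~160.41; the reported 160.35 differs only through rounding of the inputs.
```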


Subject(s)
Cariostatic Agents , Cost-Benefit Analysis , Fluorides, Topical , Markov Chains , Root Caries , Sodium Fluoride , Humans , Root Caries/prevention & control , Root Caries/economics , Fluorides, Topical/therapeutic use , Fluorides, Topical/economics , Middle Aged , Sodium Fluoride/therapeutic use , Sodium Fluoride/economics , Sodium Fluoride/administration & dosage , Cariostatic Agents/therapeutic use , Cariostatic Agents/economics , Cariostatic Agents/administration & dosage , China , Aged , Cost-Effectiveness Analysis
9.
Pharmacotherapy ; 44(4): 331-342, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38576238

ABSTRACT

BACKGROUND: Patients with Crohn's disease (CD) who lose response to biologics experience reduced quality of life (QoL) and costly hospitalizations. Precision-guided dosing (PGD) provides a comprehensive pharmacokinetic (PK) profile that allows for biologic dosing to be personalized. We analyzed the cost-effectiveness of infliximab (IFX) PGD relative to two other dose intensification strategies (DIS). METHODS: We developed a hybrid (Markov and decision tree) model of patients with CD who had a clinical response to IFX induction. The analysis had a US payer perspective, a base case time horizon of 5 years, and a 4-week cycle length. There were three IFX dosing comparators: PGD; dose intensification based on symptoms, inflammatory markers, and trough IFX concentration (DIS1); and dose intensification based on symptoms alone (DIS2). Patients that failed IFX initiated ustekinumab, followed by vedolizumab, and conventional therapy. Transition probabilities for IFX were estimated from real-world clinical PK data and interventional clinical trial patient-level data. All other transition probabilities were derived from published randomized clinical trials and cost-effectiveness analyses. Utility values were sourced from previous health technology assessments. Direct costs included biologic acquisition and infusion, surgeries and procedures, conventional therapy, and lab testing. The primary outcomes were incremental cost-effectiveness ratios (ICERs). The robustness of results was assessed via one-way sensitivity, scenario, and probabilistic sensitivity analyses (PSA). RESULTS: PGD was the cost-effective IFX dosing strategy with an ICER of $122,932 per quality-adjusted life year (QALY) relative to DIS1 and dominating DIS2. PGD had the lowest percentage (1.1%) of patients requiring a new biologic through 5 years (8.9% and 74.4% for DIS1 and DIS2, respectively). One-way sensitivity analysis demonstrated that the cost-effectiveness of PGD was most sensitive to the time between IFX doses. PSA demonstrated that joint parameter uncertainty had moderate impact on some results. CONCLUSIONS: PGD provides clinical and QoL benefits by maintaining remission and avoiding IFX failure; it is the most cost-effective under conservative assumptions.


Subject(s)
Cost-Benefit Analysis , Crohn Disease , Gastrointestinal Agents , Infliximab , Humans , Infliximab/administration & dosage , Infliximab/economics , Infliximab/therapeutic use , Crohn Disease/drug therapy , Adult , Gastrointestinal Agents/administration & dosage , Gastrointestinal Agents/economics , Gastrointestinal Agents/therapeutic use , Quality-Adjusted Life Years , Decision Trees , Markov Chains , Dose-Response Relationship, Drug , Quality of Life , Precision Medicine
10.
PLoS One ; 19(4): e0295074, 2024.
Article in English | MEDLINE | ID: mdl-38578763

ABSTRACT

This work derives a theoretical value for the entropy of a Linear Additive Markov Process (LAMP), an expressive but simple model able to generate sequences with a given autocorrelation structure. Our research establishes that the theoretical entropy rate of a LAMP model is equivalent to the theoretical entropy rate of the underlying first-order Markov Chain. The LAMP model captures complex relationships and long-range dependencies in data with similar expressibility to a higher-order Markov process. While a higher-order Markov process has a polynomial parameter space, a LAMP model is characterised only by a probability distribution and the transition matrix of an underlying first-order Markov Chain. This surprising result can be explained by the information balance between the additional structure imposed by the next state distribution of the LAMP model, and the additional randomness of each new transition. Understanding the entropy of the LAMP model provides a tool to model complex dependencies in data while retaining useful theoretical results. To emphasise the practical applications, we use the LAMP model to estimate the entropy rate of the LastFM, BrightKite, Wikispeedia and Reuters-21578 datasets. We compare estimates calculated using frequency probability estimates, a first-order Markov model and the LAMP model, also considering two approaches to ensure the transition matrix is irreducible. In most cases the LAMP entropy rates are lower than those of the alternatives, suggesting that the LAMP model is better at accommodating structural dependencies in the processes, achieving a more accurate estimate of the true entropy.
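The entropy rate of a first-order Markov chain, which the paper proves the LAMP model inherits, can be computed directly from the transition matrix and its stationary distribution: H = -Σᵢ πᵢ Σⱼ Pᵢⱼ log₂ Pᵢⱼ. A sketch with an arbitrary two-state example:

```python
import numpy as np

# Arbitrary illustrative transition matrix (all entries positive, so
# no zero-probability masking is needed for the logarithm).
P = np.array([[0.7, 0.3],
              [0.2, 0.8]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()

# Entropy rate H = -sum_i pi_i sum_j P_ij log2 P_ij.
H = -np.sum(pi[:, None] * P * np.log2(P))
print(f"entropy rate: {H:.4f} bits per step")
```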


Subject(s)
Algorithms , Markov Chains , Entropy , Probability , Linear Models
11.
BMC Med Res Methodol ; 24(1): 86, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38589783

ABSTRACT

Prostate cancer is the most common cancer after non-melanoma skin cancer and the second leading cause of cancer deaths in US men. Its incidence and mortality rates vary substantially across geographical regions and over time, with large disparities by race, geographic regions (i.e., Appalachia), among others. The widely used Cox proportional hazards model is usually not applicable in such scenarios owing to the violation of the proportional hazards assumption. In this paper, we fit Bayesian accelerated failure time models for the analysis of prostate cancer survival and take dependent spatial structures and temporal information into account by incorporating random effects with multivariate conditional autoregressive priors. In particular, we relax the proportional hazards assumption, consider flexible frailty structures in space and time, and also explore strategies for handling the temporal variable. The parameter estimation and inference are based on a Markov chain Monte Carlo (MCMC) technique under a Bayesian framework. The deviance information criterion is used to check goodness of fit and to select the best candidate model. Extensive simulations are performed to examine and compare the performances of models in different contexts. Finally, we illustrate our approach by using the 2004-2014 Pennsylvania Prostate Cancer Registry data to explore spatial-temporal heterogeneity in overall survival and identify significant risk factors.


Subject(s)
Models, Statistical , Prostatic Neoplasms , Male , Humans , Bayes Theorem , Routinely Collected Health Data , Proportional Hazards Models , Markov Chains
12.
Clin Infect Dis ; 78(Supplement_2): S146-S152, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38662703

ABSTRACT

Globally, there are over 1 billion people infected with soil-transmitted helminths (STHs), mostly living in marginalized settings with inadequate sanitation in sub-Saharan Africa and Southeast Asia. The World Health Organization recommends an integrated approach to STH morbidity control through improved access to sanitation and hygiene education and the delivery of preventive chemotherapy (PC) to school-age children delivered through schools. Progress of STH control programs is currently estimated using a baseline (pre-PC) school-based prevalence survey and then monitored using periodic school-based prevalence surveys, known as Impact Assessment Surveys (IAS). We investigated whether integrating geostatistical methods with a Markov model or a mechanistic transmission model for projecting prevalence forward in time from baseline can improve IAS design strategies. To do this, we applied these 2 methods to prevalence data collected in Kenya, before evaluating and comparing their performance in accurately informing optimal survey design for a range of IAS sampling designs. We found that, although both approaches performed well, the mechanistic method more accurately projected prevalence over time and provided more accurate information for guiding survey design. Both methods performed less well in areas with persistent STH hotspots where prevalence did not decrease despite multiple rounds of PC. Our findings show that these methods can be useful tools for more efficient and accurate targeting of PC. The general framework built in this paper can also be used for projecting prevalence and informing survey design for other neglected tropical diseases.
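A minimal sketch of the Markov projection idea: a two-state (uninfected/infected) chain pushed forward between survey rounds. The per-round clearance and reinfection probabilities are invented placeholders, not the values fitted to the Kenyan data.

```python
import numpy as np

# Assumed per-round probabilities: clearance reflects a PC round,
# infection reflects ongoing exposure between rounds.
p_infect, p_clear = 0.10, 0.45
P = np.array([[1 - p_infect, p_infect],   # from uninfected
              [p_clear, 1 - p_clear]])    # from infected

prev = 0.30   # baseline (pre-PC) prevalence
for t in range(1, 6):
    state = np.array([1 - prev, prev]) @ P   # one Markov step per round
    prev = state[1]
    print(f"after PC round {t}: projected prevalence = {prev:.3f}")
```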


Subject(s)
Helminthiasis , Markov Chains , Soil , Humans , Helminthiasis/epidemiology , Helminthiasis/transmission , Prevalence , Kenya/epidemiology , Soil/parasitology , Child , Helminths/isolation & purification , Animals , Models, Statistical , Adolescent , Schools
13.
Biom J ; 66(3): e2300279, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38576312

ABSTRACT

Reduced major axis (RMA) regression, widely used in zoology, botany, ecology, biology, spectroscopy, and other fields, improves on ordinary least squares regression by relaxing the assumption that the covariates are measured without error. A Bayesian implementation of RMA regression is presented in this paper, and the equivalence of the parameter estimates under the Bayesian and frequentist frameworks is proved. This model-based Bayesian RMA method is advantageous because the posterior estimates, standard deviations, and credible intervals of the estimates can be obtained directly through Markov chain Monte Carlo methods. In addition, it extends straightforwardly to the multivariate RMA case. The performance of the Bayesian RMA approach is evaluated in a simulation study, and, finally, the proposed method is applied to analyze a plantation dataset.
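The frequentist RMA point estimates that the Bayesian version is shown to match are one-liners: the slope is sign(r)·sd(y)/sd(x) and the intercept follows from the means. A sketch on synthetic data:

```python
import numpy as np

# Synthetic data for illustration; the paper applies this to a
# plantation dataset.
rng = np.random.default_rng(0)
x = rng.normal(10, 2, 100)
y = 1.5 * x + rng.normal(0, 1, 100)

# RMA estimates: slope = sign(correlation) * sd(y) / sd(x).
r = np.corrcoef(x, y)[0, 1]
slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
intercept = y.mean() - slope * x.mean()
print(f"RMA slope = {slope:.3f}, intercept = {intercept:.3f}")
```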


Subject(s)
Ecology , Bayes Theorem , Computer Simulation , Markov Chains , Monte Carlo Method
14.
Sci Rep ; 14(1): 5694, 2024 03 08.
Article in English | MEDLINE | ID: mdl-38459084

ABSTRACT

Statistical techniques are applied not only to achieve high-quality products but also in many health-related fields such as medicine and biology. Keeping the quality performance of an item at the desired level is an important issue in many fields, and process capability indices play a vital role in evaluating an item's performance. In this paper, the larger-the-better process capability index for the three-parameter Omega model is calculated on the basis of a progressive type-II censored sample. Under progressive type-II censoring, statistical inference about the process capability index is carried out through maximum likelihood. A confidence interval and a hypothesis test for estimating the lifetime performance of products are also proposed. A Gibbs-within-Metropolis-Hastings sampling procedure is used to perform the Markov Chain Monte Carlo (MCMC) technique and obtain Bayes estimates of the unknown parameters. A simulation study shows that the Omega distribution performs effectively. The paper closes with two real-life applications: one to high-performance liquid chromatography (HPLC) data from blood samples of organ transplant recipients, and the other to ball-bearing data. These applications illustrate the importance of the Omega distribution in lifetime data analysis.


Subject(s)
Bayes Theorem , Computer Simulation , Markov Chains , Monte Carlo Method
15.
PeerJ ; 12: e16509, 2024.
Article in English | MEDLINE | ID: mdl-38426131

ABSTRACT

Step-selection models are widely used to study animals' fine-scale habitat selection based on movement data. Resource preferences and movement patterns, however, often depend on the animal's unobserved behavioral states, such as resting or foraging. As this is ignored in standard (integrated) step-selection analyses (SSA, iSSA), different approaches have emerged to account for such states in the analysis. The performance of these approaches and the consequences of ignoring the states in step-selection analysis, however, have rarely been quantified. We evaluate the recent idea of combining iSSAs with hidden Markov models (HMMs), which allows for a joint estimation of the unobserved behavioral states and the associated state-dependent habitat selection. Besides theoretical considerations, we use an extensive simulation study and a case study on fine-scale interactions of simultaneously tracked bank voles (Myodes glareolus) to compare this HMM-iSSA empirically to both the standard and a widely used classification-based iSSA (i.e., a two-step approach based on a separate prior state classification). Moreover, to facilitate its use, we implemented the basic HMM-iSSA approach in the R package HMMiSSA available on GitHub.
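As a hedged illustration of the HMM layer in such analyses, the sketch below Viterbi-decodes behavioural states from step lengths using exponential state-dependent step-length distributions. The two states, parameters, and data are invented, and this shows only the state-decoding ingredient, not the joint HMM-iSSA estimator.

```python
import numpy as np

# Assumed two-state HMM: "resting" gives short steps, "foraging" long ones.
A = np.array([[0.9, 0.1],      # resting tends to persist
              [0.2, 0.8]])     # foraging too
rate = np.array([2.0, 0.2])    # exponential step-length rates per state

def viterbi(steps):
    logA = np.log(A)
    # Exponential log-density: log(rate) - rate * step, per state.
    logB = np.log(rate)[None, :] - np.outer(steps, rate)
    delta = np.log([0.5, 0.5]) + logB[0]
    back = []
    for b in logB[1:]:
        trans = delta[:, None] + logA        # score of each predecessor
        back.append(trans.argmax(axis=0))    # best predecessor per state
        delta = trans.max(axis=0) + b
    path = [int(delta.argmax())]             # backtrack the best path
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]

steps = np.array([0.1, 0.2, 0.15, 3.0, 4.2, 3.8, 0.3])  # toy step lengths
print("decoded states (0=resting, 1=foraging):", viterbi(steps))
```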


Subject(s)
Ecosystem , Movement , Animals , Markov Chains , Computer Simulation
16.
PLoS One ; 19(3): e0297755, 2024.
Article in English | MEDLINE | ID: mdl-38427677

ABSTRACT

The high-quality development of the service industry has become an important engine of sustainable economic development. Based on panel data from 2005 to 2020, this paper first constructs an evaluation index system for the high-quality development of the service industry. Second, kernel density estimation, Markov chains, and the Dagum Gini coefficient are used to characterize the regional differences and dynamic evolution of the service industry, and the Koo method is used to explore its spatial agglomeration. Finally, social network analysis is used to identify core indicators. The study found that: (1) from 2005 to 2020, the overall level of the service industry first decreases and then increases, with Chengdu and Chongqing leading other cities; (2) the development of the service industry in the Chengdu-Chongqing Economic Circle (CCEC) shows large spatial differences, mainly due to inter-regional differences; (3) the level of spatial agglomeration varies little, with high agglomeration mainly in Chengdu; (4) indicators such as the level of human capital are the core factors of high-quality development. This study has theoretical and practical significance for the optimization and upgrading of the service industry in the CCEC and for the synergistic development of the region.


Subject(s)
Industry , Sustainable Development , Humans , Cities , Markov Chains , China , Economic Development
17.
J Am Med Inform Assoc ; 31(5): 1093-1101, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38472144

ABSTRACT

OBJECTIVE: To introduce 2 R-packages that facilitate conducting health economics research on OMOP-based data networks, aiming to standardize and improve the reproducibility, transparency, and transferability of health economic models. MATERIALS AND METHODS: We developed the software tools and demonstrated their utility by replicating a UK-based heart failure data analysis across 5 different international databases from Estonia, Spain, Serbia, and the United States. RESULTS: We examined treatment trajectories of 47 163 patients. The overall incremental cost-effectiveness ratio (ICER) for telemonitoring relative to standard of care was 57 472 €/QALY. Country-specific ICERs were 60 312 €/QALY in Estonia, 58 096 €/QALY in Spain, 40 372 €/QALY in Serbia, and 90 893 €/QALY in the US, all of which surpassed the established willingness-to-pay thresholds. DISCUSSION: Currently, cost-effectiveness analysis lacks standard tools, is performed in an ad-hoc manner, and relies heavily on published information that might not be specific to local circumstances. Published results often exhibit a narrow focus, centred on a single site, and provide only partial decision criteria, limiting their generalizability and comprehensive utility. CONCLUSION: We created 2 R-packages to pioneer cost-effectiveness analysis in OMOP CDM data networks. The first manages state definitions and database interaction, while the second focuses on Markov model learning and profile synthesis. We demonstrated their utility in a multisite heart failure study comparing telemonitoring with standard care, finding telemonitoring not cost-effective.


Subject(s)
Cost-Effectiveness Analysis , Heart Failure , Humans , United States , Cost-Benefit Analysis , Reproducibility of Results , Models, Economic , Heart Failure/therapy , Markov Chains
18.
BMC Infect Dis ; 24(1): 351, 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38532346

ABSTRACT

PURPOSE: This study aims to evaluate the effectiveness of mitigation strategies and analyze the impact of human behavior on the transmission of Mpox. The results can provide guidance to public health authorities on comprehensive prevention and control for the new Mpox virus strain in the Democratic Republic of Congo as of December 2023. METHODS: We develop a two-layer Watts-Strogatz network model. The basic reproduction number is calculated using the next-generation matrix approach. A Markov chain Monte Carlo (MCMC) optimization algorithm is used to fit the network model to Mpox cases in Canada. Numerical simulations are used to assess the impact of mitigation strategies and human behavior on the final epidemic size. RESULTS: Our results show that the contact transmission rate between low-risk groups and susceptible humans increases when the contact transmission rate between high-risk groups and susceptible humans is controlled as the Mpox epidemic spreads. The contact transmission rate of high-risk groups after May 18, 2022, is approximately 20% lower than that before May 18, 2022. Our findings indicate a positive correlation between the basic reproduction number and the level of heterogeneity in human contacts, with the basic reproduction number estimated at 2.3475 (95% CI: 0.0749-6.9084). Reducing the average number of sexual contacts to two per week effectively reduces the reproduction number to below one. CONCLUSION: Attention should be paid to the re-emergence of epidemics driven by low-risk groups once an outbreak dominated by high-risk groups is under control. Numerical simulations show that reducing the average number of sexual contacts to two per week is effective in slowing the rapid spread of the epidemic. Our findings offer guidance for the public health authorities of the Democratic Republic of Congo in developing effective mitigation strategies.
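The next-generation-matrix calculation behind the basic reproduction number can be sketched for a toy two-group (high-/low-risk) model; the transmission and removal rates below are illustrative placeholders, not the fitted Mpox values.

```python
import numpy as np

# F: rates of new infections between groups; V: removal (recovery) rates.
# Both matrices are invented for illustration.
F = np.array([[0.8, 0.1],
              [0.1, 0.2]])
V = np.diag([0.5, 0.5])

# R0 is the spectral radius of the next-generation matrix K = F V^{-1}.
K = F @ np.linalg.inv(V)
R0 = np.abs(np.linalg.eigvals(K)).max()
print(f"R0 = {R0:.3f}")
```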


Subject(s)
Epidemics , Monkeypox , Humans , Epidemics/prevention & control , Disease Outbreaks , Basic Reproduction Number , Markov Chains
19.
Neural Netw ; 174: 106246, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38547801

ABSTRACT

An agent learns to organize its decision behavior to achieve a behavioral goal, such as reward maximization, and reinforcement learning is often used for this optimization. Learning an optimal behavioral strategy is difficult when the events necessary for learning are only partially observable, a setting known as a Partially Observable Markov Decision Process (POMDP). The real-world environment, however, also presents many events irrelevant to reward delivery and to an optimal behavioral strategy. Conventional POMDP methods, which attempt to infer transition rules among all observations, including irrelevant states, are ineffective in such an environment. Assuming a Redundantly Observable Markov Decision Process (ROMDP), we propose a method for goal-oriented reinforcement learning that efficiently learns state transition rules among reward-related "core states" from redundant observations. Starting with a small number of initial core states, our model gradually adds new core states to the transition diagram until it achieves an optimal behavioral strategy consistent with the Bellman equation. We demonstrate that the resulting inference model outperforms conventional POMDP methods. We emphasize that our model, which contains only the core states, has high explainability. Furthermore, the proposed method suits online learning, as it suppresses memory consumption and improves learning speed.
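A minimal sketch of the Bellman-equation machinery the stopping criterion relies on: value iteration on a toy fully observed 3-state, 2-action MDP with invented transitions and rewards. This shows only the optimality backup, not the ROMDP inference itself.

```python
import numpy as np

# P[a, s, s']: transition probabilities; R[a, s]: immediate rewards.
# All values are arbitrary illustrative examples.
P = np.array([
    [[0.8, 0.2, 0.0], [0.0, 0.8, 0.2], [0.1, 0.0, 0.9]],
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]],
])
R = np.array([[0.0, 0.0, 1.0],
              [0.1, 0.1, 0.5]])
gamma = 0.9

V = np.zeros(3)
for _ in range(500):
    Q = R + gamma * (P @ V)      # Q[a, s] = R[a, s] + gamma * E[V(s')]
    V_new = Q.max(axis=0)        # Bellman optimality backup
    if np.max(np.abs(V_new - V)) < 1e-8:   # converged: Bellman-consistent
        break
    V = V_new
print("optimal values:", np.round(V, 3), "policy:", Q.argmax(axis=0))
```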


Subject(s)
Goals , Learning , Reinforcement, Psychology , Reward , Markov Chains
20.
J Chem Phys ; 160(12)2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38516972

ABSTRACT

Protein conformational changes play crucial roles in their biological functions. In recent years, the Markov State Model (MSM) constructed from extensive Molecular Dynamics (MD) simulations has emerged as a powerful tool for modeling complex protein conformational changes. In MSMs, dynamics are modeled as a sequence of Markovian transitions among metastable conformational states at discrete time intervals (called lag time). A major challenge for MSMs is that the lag time must be long enough to allow transitions among states to become memoryless (or Markovian). However, this lag time is constrained by the length of individual MD simulations available to track these transitions. To address this challenge, we have recently developed Generalized Master Equation (GME)-based approaches, encoding non-Markovian dynamics using a time-dependent memory kernel. In this Tutorial, we introduce the theory behind two recently developed GME-based non-Markovian dynamic models: the quasi-Markov State Model (qMSM) and the Integrative Generalized Master Equation (IGME). We subsequently outline the procedures for constructing these models and provide a step-by-step tutorial on applying qMSM and IGME to study two peptide systems: alanine dipeptide and villin headpiece. This Tutorial is available at https://github.com/xuhuihuang/GME_tutorials. The protocols detailed in this Tutorial aim to be accessible for non-experts interested in studying the biomolecular dynamics using these non-Markovian dynamic models.
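A minimal sketch of the basic MSM construction step that qMSM and IGME extend: counting lagged transitions in a discretized trajectory, row-normalising, and checking Markovianity via Chapman-Kolmogorov. This is a generic sketch on a synthetic placeholder trajectory (i.i.d., hence trivially Markovian), not code from the linked tutorial.

```python
import numpy as np

rng = np.random.default_rng(0)
traj = rng.choice(3, size=10000, p=[0.5, 0.3, 0.2])  # placeholder state sequence

def msm(traj, tau, n_states=3):
    # Count transitions i -> j separated by tau steps, then row-normalise.
    C = np.zeros((n_states, n_states))
    for i, j in zip(traj[:-tau], traj[tau:]):
        C[i, j] += 1
    return C / C.sum(axis=1, keepdims=True)

# Chapman-Kolmogorov check: once tau is long enough for memorylessness,
# T(2*tau) should be close to T(tau) @ T(tau).
T1, T2 = msm(traj, tau=5), msm(traj, tau=10)
print("CK deviation:", np.abs(T2 - T1 @ T1).max())
```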


Subject(s)
Molecular Dynamics Simulation , Proteins , Markov Chains , Proteins/chemistry , Peptides , Dipeptides